
    Under-five mortality: spatial-temporal clusters in Ifakara HDSS in South-eastern Tanzania.

    BACKGROUND
    Childhood mortality remains an important subject, particularly in sub-Saharan Africa, where levels are still unacceptably high. Achieving Millennium Development Goal 4 calls for comprehensive application of proven, cost-effective interventions. Understanding the spatial clustering of childhood mortality can guide the targeting of interventions, directing them strategically to the populations where mortality is highest and where the interventions are most likely to make an impact.
    METHODS
    Annual child mortality rates were calculated for each village, using person-years observed as the denominator. Kulldorff's spatial scan statistic was used to identify and test childhood mortality clusters. All under-five deaths that occurred within the 10-year period from 1997 to 2006 were included in the analysis. Villages were used as the units of clustering; all 25 villages in the Ifakara health and demographic surveillance (HDSS) area were included.
    RESULTS
    Of the 10 years analysed, statistically significant spatial clustering was identified in only 2 years (1998 and 2001). In 1998, the statistically significant cluster (p < 0.01) comprised nine villages, with 106 childhood deaths observed against 77.3 expected. The other statistically significant cluster (p < 0.05), identified in 2001, comprised a single village, with 36 childhood deaths observed against 20.3 expected. Purely temporal analysis indicated that the year 2003 was a significant cluster (p < 0.05), with 393 deaths observed and 335.8 expected. Spatial-temporal analysis identified nine villages as a statistically significant cluster (p < 0.05) for the period January 1997-December 1998, with 205 deaths observed and 150.7 expected.
    CONCLUSION
    There is evidence of spatial clustering in childhood mortality within the Ifakara HDSS. Further investigations are needed to explore the source of the clustering and to identify strategies for reaching the cluster population with existing effective interventions. However, this should happen alongside delivery of interventions to the broader population.
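    The core of the scan statistic is easy to reproduce. Below is a minimal sketch of the Poisson likelihood-ratio computation that Kulldorff's method applies to each candidate cluster; the total death count used here is a hypothetical stand-in, not the study's figure.

```python
import math

def kulldorff_poisson_llr(c, e, total):
    """Poisson log-likelihood ratio for a candidate cluster (Kulldorff).

    c: observed cases inside the cluster
    e: expected cases inside the cluster (expected counts are scaled so
       that they sum to the total observed count `total`)
    Returns 0 for clusters with no excess, since only high-rate clusters
    are of interest here.
    """
    if c <= e:
        return 0.0
    return (c * math.log(c / e)
            + (total - c) * math.log((total - c) / (total - e)))

# Illustrative numbers echoing the 1998 cluster (106 observed vs 77.3
# expected deaths); the assumed total of 400 deaths is hypothetical.
print(f"LLR = {kulldorff_poisson_llr(106, 77.3, 400):.2f}")
# Significance (the reported p < 0.01) is then assessed by Monte Carlo
# replication under the null hypothesis of no clustering.
```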

    Improving pneumonia case-management in Benin: a randomized trial of a multi-faceted intervention to support health worker adherence to Integrated Management of Childhood Illness guidelines

    Background
    Pneumonia is a leading cause of death among children under five years of age. The Integrated Management of Childhood Illness strategy can improve the quality of care for pneumonia and other common illnesses in developing countries, but adherence to these guidelines could be improved. We evaluated an intervention in Benin to support health worker adherence to the guidelines after training, focusing on pneumonia case management.
    Methods
    We conducted a randomized trial. After a health facility survey in 1999 to assess health care quality before Integrated Management of Childhood Illness training, health workers received training plus either study supports (job aids, non-financial incentives and supervision of workers and supervisors) or "usual" supports. Follow-up surveys were conducted in 2001, 2002 and 2004. Outcomes were indicators of health care quality for Integrated Management-defined pneumonia. Further analyses included a graphical pathway analysis and multivariable logistic regression modelling to identify factors influencing case-management quality.
    Results
    We observed 301 consultations of children with non-severe pneumonia performed by 128 health workers in 88 public and private health facilities. Although outcomes improved in both intervention and control groups, we found no statistically significant difference between groups. However, training proceeded slowly, and low-quality care from untrained health workers diluted intervention effects. Per-protocol analyses suggested that health workers with training plus study supports performed better than those with training plus usual supports (20.4 and 19.2 percentage-point improvements for recommended treatment [p = 0.08] and "recommended or adequate" treatment [p = 0.01], respectively). Both groups tended to perform better than untrained health workers. Analyses of treatment errors revealed that incomplete assessment and difficulties processing clinical findings led to missed pneumonia diagnoses, and missed diagnoses led to inadequate treatment. Increased supervision frequency was associated with better care (odds ratio for recommended treatment = 2.1 [95% confidence interval: 1.1-3.9] per additional supervisory visit).
    Conclusion
    Integrated Management of Childhood Illness training was useful, but insufficient, to achieve high-quality pneumonia case management. Our study supports led to additional improvements, although large gaps in performance remained. A simple graphical pathway analysis can identify specific, common errors that health workers make in the case-management process; this information could be used to target quality improvement activities, such as supervision (ClinicalTrials.gov number NCT00510679).
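    As a hedged illustration of the modelling step, the sketch below fits a multivariable logistic regression on synthetic consultation records and exponentiates a coefficient to recover an odds ratio per additional supervisory visit. The variable names, data, and use of statsmodels are assumptions for illustration, not the trial's actual analysis code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
# Hypothetical predictors: supervisory visits received and training status.
visits = rng.poisson(2, size=n)
trained = rng.integers(0, 2, size=n)
# Simulate "recommended treatment" with a true OR of ~2 per extra visit.
logit = -1.5 + np.log(2.0) * visits + 0.5 * trained
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([visits, trained]))
fit = sm.Logit(y, X).fit(disp=False)
or_per_visit = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])  # 95% CI on the odds-ratio scale
print(f"OR per additional visit: {or_per_visit:.2f} "
      f"(95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```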

    Malaria risk and access to prevention and treatment in the paddies of the Kilombero Valley, Tanzania

    Background: The Kilombero Valley is a highly malaria-endemic agricultural area in south-eastern Tanzania. Seasonal flooding of the valley is favourable to malaria transmission. During the farming season, many households move to distant field sites (shamba in Swahili) in the fertile river floodplain for the cultivation of rice. In the shamba, people live for several months in temporary shelters, far from the nearest health services. This study assessed the impact of seasonal movements to remote fields on malaria risk and treatment-seeking behaviour. Methods: A longitudinal study followed approximately 100 randomly selected farming households over six months. Every household was visited monthly, and the whereabouts of household members, activities in the fields, fever cases and treatment seeking for recent fever episodes were recorded. Results: Fever incidence rates were lower in the shamba than in the villages, and moving to the shamba did not increase the risk of having a fever episode. Children aged 1-4 years, who usually spend a considerable amount of time in the shamba with their caretakers, were more likely to have a fever than adults (odds ratio = 4.47, 95% confidence interval 2.35-8.51). Protection with mosquito nets in the fields was extremely good (98%), and use of antimalarials was uncommon. Despite the long distances to health services, 55.8% of fever cases were treated at a health facility, while home-management was less common (37%, 17.4-50.5). Conclusion: Living in the shamba does not appear to result in a higher fever risk. Mosquito net usage and treatment of fever in health facilities reflect awareness of malaria. Inability to obtain drugs in the fields may contribute to less irrational use of drugs but may pose an additional burden on poor farming households. A comprehensive approach is needed to improve access to treatment while at the same time assuring rational use of medicines and protecting fragile livelihoods.
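    For the odds-ratio arithmetic reported above, a minimal sketch of computing an odds ratio and its Wald 95% confidence interval from a 2x2 table; the cell counts below are hypothetical, chosen only to land near the published estimate.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
       a = exposed cases,    b = exposed non-cases,
       c = unexposed cases,  d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: fever vs no fever, children 1-4 years vs adults.
or_, lo, hi = odds_ratio_ci(a=45, b=55, c=17, d=93)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```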

    Decreased availability of antimalarials in the private sector following the policy change from chloroquine to sulphadoxine-pyrimethamine in the Kilombero Valley, Tanzania

    BACKGROUND: Malaria control strategies emphasize the need for prompt and effective treatment of malaria episodes. To increase treatment efficacy, Tanzania changed its first-line treatment from chloroquine to sulphadoxine-pyrimethamine (SP) in 2001. The effect of this policy change on the availability of antimalarials was studied in rural south-eastern Tanzania. METHODS: In 2001 and 2004, the study area was searched for commercial outlets selling drugs and their stocks were recorded. Household information was obtained from the local Demographic Surveillance System. RESULTS: From 2001 to 2004, the number of general shops stocking drugs increased by 15% and the number of drug stores nearly doubled. However, the proportion of general shops stocking antimalarials dropped markedly, resulting in an almost 50% decrease in antimalarial-selling outlets. This left more households located farther from a treatment source. In 2004, five of the 25 studied villages, with a total population of 13,506 (18%), had neither a health facility nor a shop as a source of malaria treatment. CONCLUSION: While the change to SP resulted in higher treatment efficacy, it also decreased antimalarial availability in the study area. Although there was no apparent impact on overall antimalarial use, the decline in access may have disproportionately affected the poorest and most remote groups. In view of the imminent policy change to artemisinin-based combination therapy, these issues need to be addressed urgently if the benefits of this new class of antimalarials are to be extended to the whole population.
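    The access finding hinges on the distance from each household to its nearest stocked outlet. A minimal sketch of that nearest-source computation, using made-up coordinates rather than the study's survey data:

```python
import math

# Hypothetical coordinates (km on a local grid); not the study's data.
households = [(0.0, 0.0), (1.2, 3.4), (5.0, 1.0), (7.5, 7.5)]
outlets_2001 = [(0.5, 0.5), (4.8, 1.2), (7.0, 7.0)]
outlets_2004 = [(0.5, 0.5)]  # fewer outlets still stocking antimalarials

def nearest(pt, sources):
    """Euclidean distance from a point to its closest source."""
    return min(math.dist(pt, s) for s in sources)

for year, outlets in (("2001", outlets_2001), ("2004", outlets_2004)):
    dists = [nearest(h, outlets) for h in households]
    print(year, f"mean distance to antimalarial source: "
                f"{sum(dists) / len(dists):.2f} km")
```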

    Obstacles to prompt and effective malaria treatment lead to low community-coverage in two rural districts of Tanzania

    BACKGROUND
    Malaria is still a leading child killer in sub-Saharan Africa. Yet, access to prompt and effective malaria treatment, a mainstay of any malaria control strategy, is sub-optimal in many settings. Little is known about obstacles to treatment and the community-effectiveness of case-management strategies. This research quantified treatment-seeking behaviour and access to treatment in a highly endemic rural Tanzanian community. The aim was to provide a better understanding of obstacles to treatment access in order to develop practical and cost-effective interventions.
    METHODS
    We conducted community-based treatment-seeking surveys covering 226 recent fever episodes in 2004 and 2005. The local Demographic Surveillance System provided additional household information. A census of drug retailers and health facilities provided data on the availability and location of treatment sources.
    RESULTS
    After intensive health education, the biomedical concept of malaria has largely been adopted by the community. 87.5% (78.2-93.8) of fever cases in children and 80.7% (68.1-90.0) in adults were treated with one of the recommended antimalarials (at the time SP, amodiaquine or quinine). However, only 22.5% (13.9-33.2) of children and 10.5% (4.0-21.5) of adults received prompt and appropriate antimalarial treatment. Health facility attendance increased the odds of receiving an antimalarial (OR = 7.7) but had no influence on correct dosage. The exemption system for under-fives in public health facilities was not functioning, and drug expenditures for children were as high in health facilities as with private retailers.
    CONCLUSION
    A clear preference for modern medicine was reflected in the frequent use of antimalarials. Yet, the quality of case-management was far from satisfactory, as was the functioning of the exemption mechanism for the main risk group. Private drug retailers played a central role by complementing existing formal health services in delivering antimalarial treatment. Health system factors like these need to be tackled urgently in order to translate the high efficacy of the newly introduced artemisinin-based combination therapy (ACT) into equitable community-effectiveness and health impact.
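    Interval estimates like those quoted with the coverage figures can be reproduced with an exact (Clopper-Pearson) binomial confidence interval. A minimal sketch using scipy; the denominator below is an assumed split consistent with the reported 87.5%, not the survey's actual counts.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial CI for a proportion of k successes in n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# E.g. 77 of 88 child fever cases treated with a recommended antimalarial.
lo, hi = clopper_pearson(77, 88)
print(f"87.5% (95% CI {100 * lo:.1f}-{100 * hi:.1f})")
```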

    Evaluation in artificial intelligence: From task-oriented to ability-oriented measurement

    The final publication is available at Springer via http://dx.doi.org/10.1007/s10462-016-9505-7
    The evaluation of artificial intelligence systems and components is crucial for the progress of the discipline. In this paper we describe and critically assess the different ways AI systems are evaluated, and the role of components and techniques in these systems. We first focus on the traditional task-oriented evaluation approach. We identify three kinds of evaluation: human discrimination, problem benchmarks and peer confrontation. We describe some of the limitations of the many evaluation schemes and competitions in these three categories, and follow the progression of some of these tests. We then focus on a less customary (and challenging) ability-oriented evaluation approach, where a system is characterised by its (cognitive) abilities, rather than by the tasks it is designed to solve. We discuss several possibilities: the adaptation of cognitive tests used for humans and animals, the development of tests derived from algorithmic information theory, or more integrated approaches under the perspective of universal psychometrics. We analyse some evaluation tests from AI that are better positioned for an ability-oriented evaluation and discuss how their problems and limitations can possibly be addressed with some of the tools and ideas that appear within the paper. Finally, we enumerate a series of lessons learnt and generic guidelines to be used when an AI evaluation scheme is under consideration.
    José Hernández-Orallo (2016). Evaluation in artificial intelligence: from task-oriented to ability-oriented measurement. Artificial Intelligence Review, 1-51. https://doi.org/10.1007/s10462-016-9505-7
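    To make the ability-oriented idea concrete, here is a toy sketch (not the paper's method) of an algorithmic-information-theory-flavoured test: tasks are weighted roughly in proportion to 2^-l(p), where l(p) is the length of the shortest description generating the task, and an agent's ability estimate is the complexity-weighted average of its per-task scores. Everything below, from the task generator to the trivial agent, is an illustrative stand-in.

```python
import random

# Toy task space: binary strings generated by tiny "programs" (here, the
# description length doubles as the RNG seed); shorter descriptions get
# exponentially more weight, echoing a universal distribution 2^-l(p).
def make_task(length):
    rng = random.Random(length)
    return [rng.randint(0, 1) for _ in range(8)], length

def agent(bits):
    """A trivial agent that always predicts the majority bit."""
    guess = 1 if sum(bits) >= len(bits) / 2 else 0
    return sum(b == guess for b in bits) / len(bits)

tasks = [make_task(l) for l in range(1, 12)]
weights = [2.0 ** -l for _, l in tasks]
score = (sum(w * agent(bits) for (bits, _), w in zip(tasks, weights))
         / sum(weights))
print(f"complexity-weighted ability estimate: {score:.3f}")
```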